Qwen2.5 MoE 2x1.5B DeepSeek Uncensored/Censored 4B GGUF
Apache-2.0
This is a Qwen2.5 MoE (Mixture of Experts) model composed of two Qwen 2.5 DeepSeek 1.5B models (one censored/regular, one uncensored), forming a 4B-parameter model in which the uncensored variant dominates the model's behavior.
A large language model with support for multiple languages.
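As a quick usage sketch, a GGUF build like this one can be loaded with llama-cpp-python. This example is not from the original card: the model filename, context size, and sampling parameters are assumptions and should be adjusted to the actual quantization you download.

```python
# Minimal sketch: loading a GGUF quantization with llama-cpp-python.
# The filename below is an assumption -- substitute the actual .gguf file
# (e.g. a Q4_K_M or Q8_0 quant) downloaded from the repository.
from llama_cpp import Llama

llm = Llama(
    model_path="Qwen2.5-MoE-2x1.5B-DeepSeek-Uncensored-Censored-4B.Q4_K_M.gguf",
    n_ctx=4096,        # context window; raise if your use case needs longer prompts
    n_gpu_layers=-1,   # offload all layers to GPU; set to 0 for CPU-only inference
)

# Chat-style completion using the chat template embedded in the GGUF.
response = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Explain mixture-of-experts models briefly."}],
    max_tokens=256,
    temperature=0.7,
)
print(response["choices"][0]["message"]["content"])
```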